# Why

Add cookie consent functionality to the Expo documentation site to comply with privacy regulations. This integrates the `@expo/styleguide-cookie-consent` package to manage user consent for analytics and marketing cookies.

> [!NOTE]
> CI forced me to lint and format a bunch of files that were totally unrelated to my work, but wouldn't build otherwise.

> [!IMPORTANT]
> **Should be merged after it lands on Expo.dev, as it has absolute URLs to new pages that need to deploy first.**

# How

- Added the `@expo/styleguide-cookie-consent` package to dependencies
- Integrated `CookieConsentProvider` in the app layout
- Refactored the analytics implementation to respect user consent preferences:
  - Replaced `AnalyticsProvider` with a `useAnalyticsPageTracking` hook
  - Modified analytics functions to check consent status before tracking
  - Added event queueing while consent is pending
  - Implemented replay functionality to process queued events when consent is granted
- Updated the newsletter signup to respect marketing consent
- Added a privacy choices button to the footer

# Test Plan

- Verified the cookie consent banner appears on first visit
- Confirmed analytics events are only sent when consent is granted
- Tested that the privacy choices button in the footer opens the consent management UI
- Ensured the newsletter signup respects marketing consent settings

# Checklist

- [x] Conforms with the [Documentation Writing Style Guide](https://github.com/expo/expo/blob/main/guides/Expo%20Documentation%20Writing%20Style%20Guide.md)
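The queue-and-replay behavior described above might be sketched as follows. This is a minimal illustration only; the function names (`track`, `onConsentChange`) and the `send` callback are hypothetical and not taken from `@expo/styleguide-cookie-consent` or the actual docs code.

```typescript
type AnalyticsEvent = { name: string; properties?: Record<string, unknown> };

let consentGranted: boolean | undefined; // undefined = consent decision still pending
const pendingEvents: AnalyticsEvent[] = [];

// Queue events while consent is pending; send only when consent is granted.
function track(event: AnalyticsEvent, send: (event: AnalyticsEvent) => void): void {
  if (consentGranted === undefined) {
    pendingEvents.push(event);
  } else if (consentGranted) {
    send(event);
  }
  // If consent was denied, the event is dropped.
}

// Replay queued events once the user grants consent; discard them on denial.
function onConsentChange(granted: boolean, send: (event: AnalyticsEvent) => void): void {
  consentGranted = granted;
  if (granted) {
    while (pendingEvents.length > 0) {
      send(pendingEvents.shift()!);
    }
  } else {
    pendingEvents.length = 0;
  }
}
```

The key design point is that nothing leaves the page before the user has decided, yet early events (like the initial page view) are not lost if consent arrives a moment later.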
…ons (#42947)

# Why

We currently hold on to a lot of memory globally due to the `memoize` function. The idea was to temporarily limit this usage with a `MAX_SIZE`. However, this still means that the new `expo-modules-autolinking` implementation has a higher memory impact than it used to. Instead, we'd like to free up memory after a full run, while making sure that the entire async context shares a cache. This will allow us to use `memoize` in more places and also ensure that we don't hold on to memory for longer than we need to.

(**Note:** This was an optimization that @byCedric and I talked about, but didn't immediately implement.)

# How

Initially, `createMemoizer` used an `AsyncLocalStorage`. This kept the runtime of `expo-modules-autolinking verify` within a neutral range for `apps/router-e2e` (+/-10ms), although we expected this approach to let us cache more calls. Swapping the memoizer out for a globally shared one that instead reference-counts its users, we see a drop in runtime of -20ms/-8% (measured with Hyperfine).

The memoizer is an instance that holds our function result cache (`Map<Fn, Map<string, any>>`). It still adheres to the `MAX_SIZE` limit per function. Every function that's wrapped with `memoize` checks the `currentMemoizer` for a `Memoizer.call` wrapper. The `Memoizer.withMemoizer` wrapper increments the ref counter, then proceeds with the callback logic, like `AsyncLocalStorage.run`. After a run, if the ref counter drops to zero, it resets `currentMemoizer = undefined`. The `Memoizer.call` function returns a cached value, or calls the memoized function with `currentMemoizer` set to the memoizer.

Since the ref counter drops to zero after all `withMemoizer` calls resolve, which resets `currentMemoizer = undefined`, garbage collection will free the memoizer from memory once it leaves every function context.
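The ref-counting scheme above might be sketched like this. It is a simplified model assembled from the description, not the actual `memoizer.ts` source: the cache-key strategy (`JSON.stringify`), the `MAX_SIZE` value, and the exact shapes of `withMemoizer` and `memoize` are assumptions.

```typescript
type AnyFn = (...args: any[]) => any;

const MAX_SIZE = 1000; // hypothetical per-function cache limit

class Memoizer {
  // Fn -> (serialized args -> result), matching Map<Fn, Map<string, any>>
  private caches = new Map<AnyFn, Map<string, any>>();
  refCount = 0;

  call<T extends AnyFn>(fn: T, args: Parameters<T>): ReturnType<T> {
    let cache = this.caches.get(fn);
    if (!cache) this.caches.set(fn, (cache = new Map()));
    const key = JSON.stringify(args);
    if (cache.has(key)) return cache.get(key);
    const result = fn(...args);
    if (cache.size < MAX_SIZE) cache.set(key, result);
    return result;
  }
}

let currentMemoizer: Memoizer | undefined;

// Nested runs share one memoizer; when the last run resolves, the ref
// counter hits zero and the memoizer becomes garbage-collectable.
async function withMemoizer<T>(callback: () => Promise<T>): Promise<T> {
  const memoizer = (currentMemoizer ??= new Memoizer());
  memoizer.refCount++;
  try {
    return await callback();
  } finally {
    if (--memoizer.refCount === 0) currentMemoizer = undefined;
  }
}

// Wrapped functions consult the current memoizer, or fall back to an
// unmemoized call when no run is active (the "at worst" case noted below).
function memoize<T extends AnyFn>(fn: T): T {
  return ((...args: Parameters<T>) =>
    currentMemoizer ? currentMemoizer.call(fn, args) : fn(...args)) as T;
}
```

Unlike `AsyncLocalStorage.run`, this shares one global instance across overlapping runs, which is what lets the whole async context hit the same cache.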
### Granular changes

- Swap out `memoize` with the new `memoize` call
- Add `Memoizer` to `CachedDependenciesLinker`
- Hoist file utilities from `src/dependencies`, so the memoized `loadPackageJson` may also be used in `src/reactNativeConfig/androidResolver.ts`
- Wrap commands in memoizers

### Notes

- Limiting `fs.promises` concurrency doesn't really seem to change the runtime much
- On Bluesky (current `main`) the runtime of `verify` drops by ~22% (657ms -> 529ms in my testing)

# Test Plan

To verify that the memoizer is used correctly, a `console.warn` was added for when `NODE_ENV === 'test'`. This means all tests in Jest are verified to be wrapped in memoizer `withMemoizer` context calls. The same has been manually verified for each command.

**Note:** At worst, when a memoizer isn't provided, we fall back to calling the function without memoization.

- Tests added for `memoizer.ts`
- Tests updated to add memoizers where needed
- Ran all commands with `NODE_ENV=test` set, to verify they are memoized

# Checklist

<!-- Please check the appropriate items below if they apply to your diff. -->

- [x] I added a `changelog.md` entry and rebuilt the package sources according to [this short guide](https://github.com/expo/expo/blob/main/CONTRIBUTING.md#-before-submitting)
- [ ] This diff will work correctly for `npx expo prebuild` & EAS Build (eg: updated a module plugin).
- [ ] Conforms with the [Documentation Writing Style Guide](https://github.com/expo/expo/blob/main/guides/Expo%20Documentation%20Writing%20Style%20Guide.md)
# Why

Branched off of #42947.

This adds code from `@expo/fingerprint` (see: #42487) to add concurrency limits to IO-bound tasks. The number of tasks we have queued up can put pressure on Node.js' IO scheduling in a slightly undesirable way, and limiting this seems to yield an improvement. This isn't purely CPU- or disk-speed-bound; it seems to come down to the sheer number of IO tasks queued up, and the amount of leftover work for the GC to deal with.

# How

- Add `concurrency.ts` from `@expo/fingerprint` and add a `taskAll` helper
  - **Note:** This was done because every unbounded `Promise.all` is a good candidate for conversion
- Add a hard-coded limit of `8` to the limiter
  - **Note:** Explanation in a comment; experimentally, this seems to do fine on a few machines. The ideal value will likely depend heavily on CPU and disk speed (rather than any constant available to us), but `8` seems to do quite well universally.
- Add `taskAll` to all `Promise.all` candidates in module resolution and dependency scanning

This doesn't limit concurrency globally, to avoid keeping track of concurrent and nested `taskAll` calls. Instead, every new task group gets a concurrency of `8`. This seems to be sufficient to lower autolinking times by up to ~20-30%.

# Test Plan

- CI covers this

# Checklist

<!-- Please check the appropriate items below if they apply to your diff. -->

- [x] I added a `changelog.md` entry and rebuilt the package sources according to [this short guide](https://github.com/expo/expo/blob/main/CONTRIBUTING.md#-before-submitting)
- [ ] This diff will work correctly for `npx expo prebuild` & EAS Build (eg: updated a module plugin).
- [ ] Conforms with the [Documentation Writing Style Guide](https://github.com/expo/expo/blob/main/guides/Expo%20Documentation%20Writing%20Style%20Guide.md)
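A bounded-concurrency `Promise.all` replacement of the kind described above might look like this. This is a sketch under stated assumptions, not the actual `concurrency.ts` code: the signature of `taskAll` and the worker-pool approach are illustrative, with only the default limit of `8` taken from the description.

```typescript
const DEFAULT_CONCURRENCY = 8; // experimentally chosen per task group, per the notes above

// Runs task factories with at most `concurrency` in flight at once,
// preserving result order like Promise.all.
async function taskAll<T>(
  tasks: Array<() => Promise<T>>,
  concurrency: number = DEFAULT_CONCURRENCY
): Promise<T[]> {
  const results: T[] = new Array(tasks.length);
  let next = 0;
  // Spawn up to `concurrency` workers that pull task indices from a shared
  // counter. Index claiming is synchronous between awaits, so no two
  // workers run the same task.
  const workers = Array.from({ length: Math.min(concurrency, tasks.length) }, async () => {
    while (next < tasks.length) {
      const index = next++;
      results[index] = await tasks[index]();
    }
  });
  await Promise.all(workers);
  return results;
}
```

Note that the callers pass task *factories* (`() => Promise<T>`) rather than already-started promises; with a plain `Promise.all`, all the IO would already be in flight before any limiter could intervene.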